Manipulating deformable linear objects (DLOs) to achieve desired shapes in constrained environments with obstacles is a meaningful but challenging task. Global planning is necessary for this highly constrained task; however, the accurate model of the DLO required by the planner is difficult to obtain owing to the deformable nature of the object, and the inevitable modeling errors significantly affect the planning results, potentially causing task failure if the robot simply executes the planned path in an open-loop manner. In this paper, we propose a coarse-to-fine framework that combines global planning and local control for dual-arm manipulation of DLOs, capable of precisely achieving a desired configuration while avoiding potential collisions among the DLO, the robot, and obstacles. Specifically, the global planner refers to a simple yet effective energy model of the DLO and computes a coarse path to guarantee the feasibility of the task; the local controller then follows that path as guidance and further shapes the DLO with closed-loop feedback to compensate for planning errors and guarantee task accuracy. Both simulations and real-world experiments demonstrate that our framework can robustly achieve desired DLO configurations in constrained environments even with an imprecise DLO model, which neither planning alone nor control alone can reliably accomplish.
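A minimal sketch of the coarse-to-fine idea under stated assumptions (the function names, the proportional gain, and the toy state interface are all hypothetical, not the authors' implementation): a global planner supplies a coarse sequence of waypoints, and a local controller tracks each one with closed-loop feedback to absorb modeling error:

```python
# Hypothetical interfaces: read_dlo_state() returns the current DLO
# feature vector; send_robot_delta() commands a small end-effector motion.
import numpy as np

def follow_coarse_path(waypoints, read_dlo_state, send_robot_delta,
                       gain=0.5, tol=1e-2, max_steps=200):
    """Track each planned waypoint, correcting residual error in closed loop."""
    for target in waypoints:                   # coarse global plan
        for _ in range(max_steps):             # fine local control
            error = target - read_dlo_state()  # closed-loop feedback
            if np.linalg.norm(error) < tol:
                break
            send_robot_delta(gain * error)     # proportional correction

# Toy stand-ins: the "robot" moves the DLO state directly.
state = np.zeros(3)
waypoints = [np.array([0.2, 0.0, 0.1]), np.array([0.4, 0.1, 0.2])]
follow_coarse_path(waypoints, lambda: state, lambda d: state.__iadd__(d))
print(state)  # close to the final waypoint
```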
Robotic manipulation of deformable linear objects (DLOs) has broad application prospects in many fields. However, a key issue is obtaining the exact deformation model (i.e., how robot motion affects DLO deformation), which is hard to compute and varies among different DLOs. Shape control of DLOs is therefore challenging, especially for large deformations, which require a global and more accurate model. In this paper, we propose an offline-and-online data-driven method for efficiently learning a global deformation model, allowing accurate modeling through offline learning and further updates for new DLOs via online adaptation. Specifically, the model, approximated by a neural network, is first trained offline on random data and then seamlessly migrated to the online phase, where it is further updated during actual manipulation. Several strategies are introduced to improve the model's efficiency and generalization ability. We propose a convex-optimization-based controller and analyze the stability of the system using the Lyapunov method. Detailed simulations and real-world experiments demonstrate that our method can efficiently and precisely estimate the deformation model and achieve large deformation control of untrained DLOs in 2D and 3D dual-arm manipulation tasks, outperforming existing methods. It accomplishes all 24 tasks with different desired shapes on different DLOs in the real world, using only simulation data for the offline learning.
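A minimal sketch of the offline-then-online scheme, assuming a toy network and random tensors in place of real DLO measurements (not the paper's architecture or controller): the same gradient update used for offline pretraining is reused online during manipulation, which is what makes the migration seamless:

```python
import torch
import torch.nn as nn

# Toy deformation model: (DLO state, robot motion) -> predicted deformation.
model = nn.Sequential(nn.Linear(16 + 4, 64), nn.Tanh(), nn.Linear(64, 16))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

def update_step(state, motion, deform):
    """One supervised update; used both offline and online."""
    pred = model(torch.cat([state, motion], dim=-1))
    loss = nn.functional.mse_loss(pred, deform)
    opt.zero_grad(); loss.backward(); opt.step()

# Offline phase: random simulation data.
s, m, d = torch.randn(64, 16), torch.randn(64, 4), torch.randn(64, 16)
update_step(s, m, d)

# Online phase: the same update, fed with pairs observed during manipulation.
update_step(s[:1], m[:1], d[:1])
```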
Spatial redundancy is widespread in visual recognition tasks: the discriminative features in an image or video frame usually correspond to a subset of the pixels, while the remaining regions are irrelevant to the task at hand. Static models that process all pixels with an equal amount of computation therefore incur considerable redundancy in both time and space. In this paper, we formulate the image recognition problem as a sequential coarse-to-fine feature learning process, mimicking the human visual system. Specifically, the proposed Glance and Focus Network (GFNet) first extracts a quick global representation of the input image at a low resolution, and then strategically attends to a series of salient (small) regions to learn finer features. The sequential process naturally facilitates adaptive inference at test time, as it can be terminated once the model is sufficiently confident in its prediction, avoiding further redundant computation. Notably, locating the discriminative regions in our model is formulated as a reinforcement learning task, so no manual annotations beyond classification labels are required. GFNet is general and flexible: it is compatible with any off-the-shelf backbone model (e.g., MobileNets, EfficientNets, and TSM), which can be conveniently deployed as the feature extractor. Extensive experiments on a variety of image classification and video recognition tasks, and with various backbone models, demonstrate the remarkable efficiency of our method. For example, it reduces the average latency of the highly efficient MobileNet-V3 by 1.3x without sacrificing accuracy. Code and pre-trained models are available at https://github.com/blackfeather-wang/gfnet-pytorch.
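A minimal sketch of glance-then-focus adaptive inference (the backbone, crop selection, and confidence threshold are hypothetical stand-ins; the real GFNet picks regions with a learned reinforcement-learning policy): classify a cheap low-resolution glance first, and stop as soon as the prediction is confident enough:

```python
import torch
import torch.nn.functional as F

def adaptive_inference(image, backbone, classifier, crops, threshold=0.9):
    """Glance at low resolution, then focus on crops until confident."""
    glance = F.interpolate(image, size=(96, 96))      # cheap global pass
    logits = classifier(backbone(glance))
    for crop in crops:                                # focus stage
        if F.softmax(logits, dim=-1).max() >= threshold:
            break                                     # early exit
        logits = logits + classifier(backbone(crop))  # refine prediction
    return logits.argmax(dim=-1)

# Toy stand-ins: a linear "backbone" over flattened pixels, random crops.
backbone = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.LazyLinear(32))
classifier = torch.nn.LazyLinear(10)
img = torch.randn(1, 3, 224, 224)
crops = [torch.randn(1, 3, 96, 96) for _ in range(2)]
print(adaptive_inference(img, backbone, classifier, crops))
```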
Different people speak with diverse personalized speaking styles. Although existing one-shot talking head methods have made significant progress in lip sync, natural facial expressions, and stable head motions, they still cannot generate diverse speaking styles in the final talking head videos. To tackle this problem, we propose a one-shot style-controllable talking face generation framework. In a nutshell, we aim to attain a speaking style from an arbitrary reference speaking video and then drive the one-shot portrait to speak with the reference speaking style and another piece of audio. Specifically, we first develop a style encoder to extract dynamic facial motion patterns of a style reference video and then encode them into a style code. Afterward, we introduce a style-controllable decoder to synthesize stylized facial animations from the speech content and style code. In order to integrate the reference speaking style into generated videos, we design a style-aware adaptive transformer, which enables the encoded style code to adjust the weights of the feed-forward layers accordingly. Thanks to the style-aware adaptation mechanism, the reference speaking style can be better embedded into synthesized videos during decoding. Extensive experiments demonstrate that our method is capable of generating talking head videos with diverse speaking styles from only one portrait image and an audio clip while achieving authentic visual effects. Project Page: https://github.com/FuxiVirtualHuman/styletalk.
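A minimal sketch of style-aware adaptation, assuming hypothetical layer sizes (not the authors' release): a style code predicts per-channel scales applied to a feed-forward layer's output, which is equivalent to rescaling the rows of its weight matrix and bias per sample, so one decoder can render many speaking styles:

```python
import torch
import torch.nn as nn

class StyleAwareFF(nn.Module):
    """Feed-forward layer whose effective weights depend on a style code."""
    def __init__(self, dim=256, style_dim=64):
        super().__init__()
        self.ff = nn.Linear(dim, dim)
        self.to_scale = nn.Linear(style_dim, dim)  # style -> channel scales

    def forward(self, x, style_code):
        # x: (B, T, dim); style_code: (B, style_dim)
        scale = self.to_scale(style_code).sigmoid()  # (B, dim), in (0, 1)
        # Scaling the output channel-wise equals rescaling the rows of
        # self.ff.weight (and its bias) for each sample in the batch.
        return self.ff(x) * scale.unsqueeze(1)

layer = StyleAwareFF()
frames, style = torch.randn(2, 10, 256), torch.randn(2, 64)
print(layer(frames, style).shape)  # torch.Size([2, 10, 256])
```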
Learning the underlying distribution of molecular graphs and generating high-fidelity samples is a fundamental research problem in drug discovery and material science. However, accurately modeling distribution and rapidly generating novel molecular graphs remain crucial and challenging goals. To accomplish these goals, we propose a novel Conditional Diffusion model based on discrete Graph Structures (CDGS) for molecular graph generation. Specifically, we construct a forward graph diffusion process on both graph structures and inherent features through stochastic differential equations (SDE) and derive discrete graph structures as the condition for reverse generative processes. We present a specialized hybrid graph noise prediction model that extracts the global context and the local node-edge dependency from intermediate graph states. We further utilize ordinary differential equation (ODE) solvers for efficient graph sampling, based on the semi-linear structure of the probability flow ODE. Experiments on diverse datasets validate the effectiveness of our framework. Particularly, the proposed method still generates high-quality molecular graphs in a limited number of steps.
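A minimal, generic sketch of the SDE/ODE machinery on a plain vector (a toy score function stands in for the hybrid graph noise-prediction model, and the discrete-graph conditioning is omitted): a VP-SDE forward perturbation and an Euler integration of the probability-flow ODE for sampling:

```python
import torch

def forward_noise(x0, t, beta=1.0):
    """VP-SDE marginal: scale the data down, mix in Gaussian noise."""
    a = torch.exp(-0.5 * beta * t)
    return a * x0 + torch.sqrt(1 - a ** 2) * torch.randn_like(x0)

def ode_sample(score, dim=8, steps=100, beta=1.0):
    """Euler steps on the probability-flow ODE, from t=1 down to t=0."""
    x = torch.randn(dim)                       # start from the prior
    dt = 1.0 / steps
    for i in range(steps, 0, -1):
        t = torch.tensor(i * dt)
        # dx = -0.5 * beta * (x + score(x, t)) dt, integrated backward
        x = x + 0.5 * beta * (x + score(x, t)) * dt
    return x

noisy = forward_noise(torch.ones(8), torch.tensor(0.5))
# The score of a standard normal stands in for the graph noise predictor.
sample = ode_sample(lambda x, t: -x)
print(noisy.shape, sample.shape)
```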
Despite some successful applications of goal-driven navigation, existing deep reinforcement learning-based approaches notoriously suffer from poor data efficiency. One of the reasons is that the goal information is decoupled from the perception module and directly introduced as a condition of decision-making, so that goal-irrelevant features of the scene representation play an adversarial role during the learning process. In light of this, we present a novel Goal-guided Transformer-enabled reinforcement learning (GTRL) approach that treats the physical goal states as an input of the scene encoder, guiding the scene representation to couple with the goal information and realizing efficient autonomous navigation. More specifically, we propose a novel variant of the Vision Transformer as the backbone of the perception system, namely the Goal-guided Transformer (GoT), and pre-train it with expert priors to boost data efficiency. Subsequently, a reinforcement learning algorithm is instantiated for the decision-making system, taking the goal-oriented scene representation from the GoT as input and generating decision commands. As a result, our approach motivates the scene representation to concentrate mainly on goal-relevant features, which substantially enhances the data efficiency of the DRL learning process, leading to superior navigation performance. Both simulation and real-world experimental results demonstrate the superiority of our approach in terms of data efficiency, performance, robustness, and sim-to-real generalization, compared with other state-of-the-art baselines. Demonstration videos are available at https://youtu.be/93LGlGvaN0c.
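A minimal sketch of goal conditioning at the perception stage, with hypothetical dimensions (not the released GoT): the goal state is embedded as an extra token and attended jointly with the patch tokens, so the pooled scene representation is computed with the goal in view:

```python
import torch
import torch.nn as nn

class GoalGuidedEncoder(nn.Module):
    """ViT-style encoder that attends over patch tokens plus a goal token."""
    def __init__(self, patch_dim=48, dim=64, goal_dim=3):
        super().__init__()
        self.patch_embed = nn.Linear(patch_dim, dim)
        self.goal_embed = nn.Linear(goal_dim, dim)
        layer = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, patches, goal):
        tokens = self.patch_embed(patches)      # (B, N, dim)
        g = self.goal_embed(goal).unsqueeze(1)  # (B, 1, dim) goal token
        out = self.encoder(torch.cat([g, tokens], dim=1))
        return out[:, 0]                        # goal-guided scene summary

enc = GoalGuidedEncoder()
scene = enc(torch.randn(2, 16, 48), torch.randn(2, 3))
print(scene.shape)  # torch.Size([2, 64]); fed to the RL policy
```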
Deep neural networks (DNNs) are found to be vulnerable to adversarial attacks, and various methods have been proposed for the defense. Among these methods, adversarial training has been drawing increasing attention because of its simplicity and effectiveness. However, the performance of adversarial training is greatly limited by the architecture of the target DNN, which often leaves the resulting DNN with poor accuracy and unsatisfactory robustness. To address this problem, we propose DSARA to automatically search for neural architectures that are accurate and robust after adversarial training. In particular, we design a novel cell-based search space specifically for adversarial training, which improves the accuracy and the robustness upper bound of the searched architectures by carefully designing the placement of the cells and the proportional relationship of the filter numbers. Then we propose a two-stage search strategy to search for both accurate and robust neural architectures. At the first stage, the architecture parameters are optimized to minimize the adversarial loss, which makes full use of the effectiveness of adversarial training in enhancing robustness. At the second stage, the architecture parameters are optimized to minimize both the natural loss and the adversarial loss utilizing the proposed multi-objective adversarial training method, so that the searched neural architectures are both accurate and robust. We evaluate the proposed algorithm on natural data and under various adversarial attacks, which reveals its superiority in finding architectures that are both accurate and robust. We also conclude that accurate and robust neural architectures tend to deploy very different structures near the input and the output, which has great practical significance for both hand-crafting and automatically designing accurate and robust neural architectures.
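A minimal sketch of the two-stage strategy with toy scalar surrogates for the natural and adversarial losses (DSARA's real search optimizes a cell-based supernet): stage one updates the architecture parameters on the adversarial loss alone; stage two updates them on both objectives:

```python
import torch

alpha = torch.zeros(8, requires_grad=True)  # architecture parameters
opt = torch.optim.Adam([alpha], lr=0.1)

def natural_loss(a):     return (a - 1.0).pow(2).sum()  # toy surrogate
def adversarial_loss(a): return (a + 0.5).pow(2).sum()  # toy surrogate

# Stage 1: optimize architecture parameters on the adversarial loss only.
for _ in range(100):
    opt.zero_grad(); adversarial_loss(alpha).backward(); opt.step()

# Stage 2: optimize on the natural and adversarial losses jointly.
for _ in range(100):
    opt.zero_grad()
    (natural_loss(alpha) + adversarial_loss(alpha)).backward()
    opt.step()

print(alpha.detach())  # settles near 0.25, the joint optimum
```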
A crucial issue of current text generation models is that they often uncontrollably generate text that is factually inconsistent with their inputs. Limited by the lack of annotated data, existing works on evaluating factual consistency directly transfer the reasoning ability of models trained on other data-rich upstream tasks, like question answering (QA) and natural language inference (NLI), without any further adaptation. As a result, they perform poorly on real generated text and are biased heavily by their single-source upstream tasks. To alleviate this problem, we propose a weakly supervised framework that aggregates multiple resources to train a precise and efficient factual metric, namely WeCheck. WeCheck first utilizes a generative model to accurately label a real generated sample by aggregating its weak labels, which are inferred from multiple resources. Then, we train the target metric model with the weak supervision while taking noise into consideration. Comprehensive experiments on a variety of tasks demonstrate the strong performance of WeCheck, which achieves a 3.4% absolute improvement over previous state-of-the-art methods on the TRUE benchmark on average.
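A minimal sketch of the aggregation-then-training idea, using a simple reliability-weighted vote (WeCheck's actual label aggregation uses a generative label model, and the features here are random stand-ins): weak scores from several upstream checkers are fused into a soft label that supervises the target metric:

```python
import torch
import torch.nn as nn

def aggregate(weak_scores, reliabilities):
    """Reliability-weighted soft vote over per-resource scores in [0, 1]."""
    w = torch.tensor(reliabilities)
    s = torch.tensor(weak_scores)
    return (w * s).sum() / w.sum()

# Toy metric model over a hypothetical (document, claim) encoding.
metric = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 1))
opt = torch.optim.Adam(metric.parameters(), lr=1e-3)

feats = torch.randn(1, 32)                                # random stand-in
soft_label = aggregate([0.9, 0.4, 0.7], [1.0, 0.5, 0.8])  # e.g. QA, NLI, ...
loss = nn.functional.binary_cross_entropy_with_logits(
    metric(feats).squeeze(), soft_label)                  # soft supervision
opt.zero_grad(); loss.backward(); opt.step()
print(float(loss))
```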
Text style transfer aims to alter the style of a sentence while preserving its content. Due to the lack of parallel corpora, most recent work focuses on unsupervised methods and often uses cycle construction to train models. Although cycle construction helps improve the style transfer ability of the model by rebuilding transferred sentences back into original-style sentences, it brings about content loss in unsupervised text style transfer tasks. In this paper, we propose a novel disentanglement-based style transfer model, StyleFlow, to enhance content preservation. Instead of the typical encoder-decoder scheme, StyleFlow can not only conduct the forward process to obtain the output, but also infer the input from the output. We design attention-aware coupling layers to disentangle the content representations and the style representations of a sentence. Besides, we propose a data augmentation method based on Normalizing Flow to improve the robustness of the model. Experimental results demonstrate that our model preserves content effectively and achieves state-of-the-art performance on most metrics.
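A minimal sketch of the invertibility the model relies on, using a plain additive coupling layer (StyleFlow's attention-aware coupling layers are more elaborate): the same module maps input to output in the forward direction and recovers the input exactly from the output:

```python
import torch
import torch.nn as nn

class AdditiveCoupling(nn.Module):
    """Invertible layer: shift half of the input by a function of the other half."""
    def __init__(self, dim=8):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim // 2, 32), nn.ReLU(),
                                 nn.Linear(32, dim // 2))

    def forward(self, x):
        a, b = x.chunk(2, dim=-1)
        return torch.cat([a, b + self.net(a)], dim=-1)  # invertible shift

    def inverse(self, y):
        a, c = y.chunk(2, dim=-1)
        return torch.cat([a, c - self.net(a)], dim=-1)  # exact inversion

layer = AdditiveCoupling()
x = torch.randn(4, 8)
assert torch.allclose(layer.inverse(layer(x)), x, atol=1e-6)
```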
Answering complex logical queries on incomplete knowledge graphs is a challenging task, and has been widely studied. Embedding-based methods require training on complex queries, and cannot generalize well to out-of-distribution query structures. Recent work frames this task as an end-to-end optimization problem, and it only requires a pretrained link predictor. However, due to the exponentially large combinatorial search space, the optimal solution can only be approximated, limiting the final accuracy. In this work, we propose QTO (Query Tree Optimization) that can efficiently find the exact optimal solution. QTO finds the optimal solution by a forward-backward propagation on the tree-like computation graph, i.e., query tree. In particular, QTO utilizes the independence encoded in the query tree to reduce the search space, where only local computations are involved during the optimization procedure. Experiments on 3 datasets show that QTO obtains state-of-the-art performance on complex query answering, outperforming previous best results by an average of 22%. Moreover, QTO can interpret the intermediate solutions for each of the one-hop atoms in the query with over 90% accuracy.
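A minimal sketch of forward-backward optimization on a chain-shaped query tree with toy link-predictor scores (QTO handles general query trees and calibrated neural scores): the forward pass propagates per-entity truth values to the root under a max-product rule, and the backward pass reads off the maximizing intermediate entity for each one-hop atom:

```python
import numpy as np

np.random.seed(0)
n = 5                                      # number of entities
r1, r2 = np.random.rand(n, n), np.random.rand(n, n)  # link-predictor scores
anchor = np.zeros(n); anchor[2] = 1.0      # one-hot anchor entity

# Forward pass: propagate truth values up the chain anchor -r1-> x -r2-> y.
t_x = (anchor[:, None] * r1).max(axis=0)   # best value for each candidate x
t_y = (t_x[:, None] * r2).max(axis=0)      # best value for each candidate y

# Backward pass: recover the maximizing assignment for each atom.
y_star = int(t_y.argmax())                    # answer entity
x_star = int((t_x * r2[:, y_star]).argmax())  # intermediate interpretation
print(f"answer y={y_star}, intermediate x={x_star}")
```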